In recent years, multi-scale generative adversarial networks (GANs) have been proposed to build generalized image-processing models from a single sample. Constrained by the small sample size, multi-scale GANs have great difficulty converging to the global optimum, which ultimately limits their capabilities. In this paper, we pioneer the introduction of PAC-Bayes generalization bound theory into the training analysis of specific models under different adversarial training methods, obtaining a non-vacuous upper bound on the generalization error for the specified multi-scale GAN structure. Based on the drastic changes we observed in the generalization error bound under different adversarial attacks and different training states, we propose an adaptive training method that greatly improves the image manipulation ability of multi-scale GANs. The experimental results show that our adaptive training method contributes substantially to the quality of the images generated by multi-scale GANs on several image manipulation tasks. In particular, for the image super-resolution restoration task, the multi-scale GAN model trained by the proposed method achieves a 100% reduction in the natural image quality evaluator (NIQE) score and a 60% reduction in root mean squared error (RMSE), outperforming many models trained on large-scale datasets.
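The abstract does not reproduce the bound itself, so as a hedged illustration, the sketch below computes a standard McAllester-style PAC-Bayes generalization bound; the function name and the example KL/sample-size values are hypothetical, not taken from the paper.

```python
import math

def pac_bayes_bound(kl_divergence, n_samples, delta=0.05):
    """McAllester-style PAC-Bayes bound on the gap between expected and
    empirical risk: sqrt((KL(Q||P) + ln(2*sqrt(n)/delta)) / (2n)).
    Holds with probability at least 1 - delta over the sample draw."""
    numerator = kl_divergence + math.log(2 * math.sqrt(n_samples) / delta)
    return math.sqrt(numerator / (2 * n_samples))

# The bound loosens with the KL term and tightens as the sample size grows,
# which is why single-sample training makes non-vacuous bounds hard to obtain.
loose = pac_bayes_bound(kl_divergence=50.0, n_samples=100)
tight = pac_bayes_bound(kl_divergence=50.0, n_samples=100_000)
```

With the same posterior-prior KL term, the hundred-sample bound is an order of magnitude looser than the hundred-thousand-sample one.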
This paper introduces a surrogate-based cross-correlation (SBCC) framework to improve the correlation performance between two image signals. The basic idea behind SBCC is that a surrogate filter/image, optimized with supervision from one raw image, will produce a more robust and more accurate correlation signal. The SBCC cross-correlation estimate is formulated with an objective function composed of a surrogate loss and a correlation consistency loss, and a closed-form solution provides efficient estimation. To our surprise, the SBCC framework also offers an alternative view for explaining a family of generalized cross-correlation (GCC) methods and for understanding the meaning of their parameters. With the help of the SBCC framework, we further propose four new specific cross-correlation methods and provide some suggestions for improving existing GCC methods. Notably, SBCC can enhance correlation robustness by incorporating additional negative context images. Considering the sub-pixel accuracy and robustness requirements of particle image velocimetry (PIV), the contribution of each term in the objective function is investigated with particle images. Compared with state-of-the-art baseline methods, the SBCC methods exhibit improved performance (accuracy and robustness) on a synthetic dataset and several challenging real experimental PIV cases.
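The abstract does not give SBCC's surrogate filter or losses; as a rough illustration of the generalized cross-correlation family that SBCC is said to reinterpret, the sketch below implements the classical GCC-PHAT estimator for the shift between two 1-D signals (the names and the toy delta-pulse example are ours, not the paper's).

```python
import numpy as np

def gcc_phat_shift(sig, ref):
    """Estimate the integer shift between two signals with the
    phase-transform (PHAT) variant of generalized cross-correlation.
    The cross-spectrum is whitened before the inverse FFT, which
    sharpens the correlation peak against broadband noise."""
    n = len(sig) + len(ref)
    spec = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
    spec /= np.abs(spec) + 1e-12          # PHAT whitening
    corr = np.fft.irfft(spec, n)
    shift = int(np.argmax(corr))
    if shift > n // 2:                    # unwrap circular lag to signed shift
        shift -= n
    return shift

ref = np.zeros(64); ref[10] = 1.0
sig = np.zeros(64); sig[15] = 1.0         # ref delayed by 5 samples
shift = gcc_phat_shift(sig, ref)          # recovers the 5-sample delay
```

Different GCC members differ only in the spectral weighting applied before the inverse FFT, which is the family of parameters the SBCC view is said to explain.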
Nuclei segmentation is a fundamental task in digital pathology analysis and can be automated by deep-learning-based methods. However, developing such automated methods requires a large amount of data with precisely annotated masks, which is hard to obtain. Training with weakly labeled data is a popular solution for reducing the annotation workload. In this paper, we propose a novel meta-learning-based nuclei segmentation method that follows the label-correction paradigm to leverage data with noisy masks. Specifically, we design a fully convolutional meta-model that can correct noisy masks using a small amount of clean meta-data. The corrected masks are then used to supervise the training of the segmentation model. Meanwhile, a bi-level optimization method is adopted to alternately update the parameters of the main segmentation model and the meta-model in an end-to-end manner. Extensive experimental results on two nuclei segmentation datasets show that our method achieves state-of-the-art results. In some noisy settings, it even achieves performance comparable to training on fully supervised data.
Digital pathology plays a crucial role in the development of artificial intelligence in the medical field. Digital pathology platforms can digitize and network pathological resources, enabling permanent storage of visual data and synchronous browsing and processing without the limitations of time and space, and they have been widely used across pathology. However, there is still a lack of an open, universal digital pathology platform that assists doctors in the management and analysis of digital pathological sections, as well as in the management and structured description of relevant patient information. Most platforms cannot integrate image viewing, annotation, and analysis with text information management. To solve the above problems, we propose a comprehensive and extensible platform, PIMIP. Built on the visualization of digital pathological sections, PIMIP provides image annotation functions that support multi-user collaborative annotation and multi-device annotation, and automates certain annotation tasks; a professional pathologist was invited to guide the annotation tasks. We also introduce a machine learning module for image analysis. The data we collected includes public data, data from local hospitals, and clinical examples, making our platform clinically oriented and suitable for clinical use. In addition to image data, the platform manages and displays text information, making it comprehensive. The platform framework is built in a modular way so that users can independently add machine learning modules, which makes our platform extensible.
The unsupervised task of aligning two or more distributions in a shared latent space has many applications, including fair representations, batch-effect mitigation, and unsupervised domain adaptation. Existing flow-based approaches estimate multiple flows independently, which is equivalent to learning multiple full generative models. Other approaches require adversarial learning, which can be computationally expensive and challenging to optimize. We therefore aim to jointly align multiple distributions while avoiding adversarial learning. Inspired by efficient alignment algorithms from optimal transport (OT) theory, we develop a simple iterative method to build deep and expressive flows. Our method decouples each iteration into two subproblems: 1) form a variational approximation of a distribution divergence, and 2) minimize this variational approximation via closed-form invertible alignment maps based on known OT results. Our empirical results show that this iterative algorithm achieves competitive distribution alignment at low computational cost while naturally handling more than two distributions.
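The paper's closed-form alignment maps are not spelled out in the abstract; as a minimal illustration of a closed-form invertible alignment map from OT theory, the sketch below uses the well-known affine OT map between one-dimensional Gaussian approximations of two sample sets (a simplification for illustration, not the paper's actual construction).

```python
import numpy as np

def gaussian_ot_map(source, target):
    """Closed-form optimal-transport map between one-dimensional Gaussian
    approximations of two sample sets: the affine, invertible map
    T(x) = mu_t + (sigma_t / sigma_s) * (x - mu_s)."""
    mu_s, sigma_s = source.mean(), source.std()
    mu_t, sigma_t = target.mean(), target.std()
    return lambda x: mu_t + (sigma_t / sigma_s) * (x - mu_s)

rng = np.random.default_rng(0)
source = rng.normal(5.0, 2.0, size=10_000)
target = rng.normal(0.0, 1.0, size=10_000)
aligned = gaussian_ot_map(source, target)(source)  # matches target moments
```

Because the map is affine with a positive slope, it is trivially invertible, which is the property that lets such maps be stacked into a deep flow.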
Temporal sentence grounding (TSG) aims to identify the temporal boundary of a specific segment in an untrimmed video given a sentence query. All existing works first apply a sparse sampling strategy to extract a fixed number of video frames and then conduct multi-modal interactions with the query sentence for reasoning. However, we argue that these methods overlook two indispensable issues: 1) Boundary bias: the annotated target segment generally designates two specific frames as the start and end timestamps. The video downsampling process may lose these two frames and take adjacent irrelevant frames as new boundaries. 2) Reasoning bias: such incorrect boundary frames also induce reasoning bias during frame-query interaction, reducing the generalization ability of the model. To alleviate the above limitations, in this paper we propose a novel Siamese Sampling and Reasoning Network (SSRN) for TSG, which introduces a siamese sampling mechanism to generate additional contextual frames that enrich and refine the new boundaries. Specifically, a reasoning strategy is developed to learn the inter-relationships among these frames and generate soft labels on boundaries for more accurate frame-query reasoning. This mechanism can also supplement the sampled sparse frames with the missing consecutive visual semantics for fine-grained activity understanding. Extensive experiments demonstrate the effectiveness of SSRN on three challenging datasets.
Brain midline shift (MLS) is one of the most critical factors considered in clinical diagnosis and treatment decision-making for intracranial hemorrhage. Existing computational methods for MLS quantification not only require intensive labeling at millimeter-level precision but also suffer from poor performance due to their dependence on specific landmarks or simplified anatomical assumptions. In this paper, we propose a novel semi-supervised framework to accurately measure the scale of MLS from head CT scans. We formulate the MLS measurement task as a deformation estimation problem and solve it using a few MLS slices with sparse labels. Meanwhile, with the help of diffusion models, we are able to use a large number of unlabeled MLS data and 2793 non-MLS cases for representation learning and regularization. The extracted representation reflects how the image differs from a non-MLS image, and the regularization plays an important role in the sparse-to-dense refinement of the deformation field. Our experiments on a real clinical brain hemorrhage dataset achieve state-of-the-art performance and generate interpretable deformation fields.
People with diabetes are more likely to develop diabetic retinopathy (DR) than healthy people, and DR is the leading cause of blindness. At present, the diagnosis of diabetic retinopathy mainly relies on experienced clinicians recognizing fine features in color fundus images, which is a time-consuming task. Therefore, in this paper, to promote the development of automatic UW-OCTA DR detection, we propose a novel semi-supervised semantic segmentation method for UW-OCTA DR image grade assessment. The method first uses the MAE algorithm to perform semi-supervised pre-training on the UW-OCTA DR grade assessment dataset, mining the supervisory information in the UW-OCTA images and thereby alleviating the need for labeled data. Second, to mine the lesion features of each region in the UW-OCTA image more fully, this paper constructs a cross-algorithm ensemble DR tissue segmentation algorithm by deploying three algorithms with different visual-feature processing strategies. The ensemble contains three sub-algorithms, namely pre-trained MAE, ConvNeXt, and SegFormer, and is named MCS-DRNet after their initials. Finally, we use the MCS-DRNet algorithm as an inspector to check and revise the preliminary results of the DR grade evaluation algorithm. The experimental results show that the mean Dice similarity coefficients of MCS-DRNet v1 and v2 are 0.5161 and 0.5544, respectively, and the quadratic weighted kappa of the DR grading evaluation is 0.7559. Our code will be released soon.
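The abstract does not describe how the three sub-algorithm outputs are fused; a minimal sketch of one plausible combination rule, majority voting over binary segmentation masks, is shown below (the voting rule and all names are our assumptions, not the paper's method).

```python
import numpy as np

def ensemble_masks(masks, threshold=2):
    """Cross-algorithm ensemble of binary segmentation masks by majority
    vote: a pixel is marked as lesion if at least `threshold` of the
    sub-algorithms mark it."""
    votes = np.sum(np.stack(masks).astype(int), axis=0)
    return (votes >= threshold).astype(np.uint8)

# Three hypothetical sub-algorithm outputs on a 2x2 image, standing in
# for the MAE, ConvNeXt, and SegFormer branches.
mae_mask  = np.array([[1, 0], [1, 0]])
conv_mask = np.array([[1, 1], [0, 0]])
segf_mask = np.array([[1, 0], [1, 1]])
fused = ensemble_masks([mae_mask, conv_mask, segf_mask])
```

Raising the threshold to 3 would require unanimity, trading recall for precision; the 2-of-3 vote keeps any pixel supported by a majority of the branches.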
Zero-shot learning has been a prominent research topic in both the vision and language areas. Recently, most existing methods adopt structured knowledge information to model explicit correlations among categories and use a deep graph convolutional network to propagate information between categories. However, it is difficult to add new categories to an existing structured knowledge graph, and deep graph convolutional networks suffer from the over-smoothing problem. In this paper, we provide a new semantic-enhanced knowledge graph that contains both expert knowledge and semantic correlations among categories. Our semantic-enhanced knowledge graph further strengthens the correlations among categories and makes it easy to absorb new categories. To propagate information on the knowledge graph, we propose a novel Residual Graph Convolutional Network (ResGCN), which effectively alleviates the over-smoothing problem. Experiments conducted on the widely used large-scale ImageNet-21K and AWA2 datasets show the effectiveness of our method and establish a new state of the art on zero-shot learning. Moreover, our results on large-scale ImageNet-21K with various feature-extraction networks show that our method has better generalization and robustness.
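The exact ResGCN layer is not specified in the abstract; a minimal sketch of a generic residual graph-convolution layer follows, illustrating why the skip connection limits over-smoothing: even after many layers, each node's output retains its own input features rather than collapsing toward the neighborhood average (a hypothetical layer, not the paper's implementation).

```python
import numpy as np

def res_gcn_layer(H, A_hat, W):
    """One residual graph-convolution layer: the propagated features
    ReLU(A_hat @ H @ W) are added back onto the input H. With node
    features H (n x d), normalized adjacency A_hat (n x n), and square
    weight matrix W (d x d), the output keeps the input's shape."""
    return H + np.maximum(A_hat @ H @ W, 0.0)

rng = np.random.default_rng(0)
A = np.array([[0.0, 1.0], [1.0, 0.0]])   # two connected nodes
A_hat = (A + np.eye(2)) / 2.0            # toy self-loop normalization
H = rng.normal(size=(2, 4))
W = rng.normal(size=(4, 4))
H_next = res_gcn_layer(H, A_hat, W)
```

With zero weights the layer reduces exactly to the identity, which is the degenerate case that makes deep residual stacks trainable.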
Current mainstream object detection methods for large aerial images usually divide the image into patches and then exhaustively detect objects of interest in every patch, regardless of whether any objects are present. This paradigm, although effective, is inefficient because the detector must process all patches, severely limiting the inference speed. This paper presents an Objectness Activation Network (OAN) that helps detectors focus on fewer patches while achieving faster inference and more accurate results, enabling a simple and effective solution to object detection in large images. In brief, OAN is a lightweight fully convolutional network that judges whether each patch contains objects; it can be easily integrated into many object detectors and jointly trained with them end-to-end. We extensively evaluate OAN with five advanced detectors. Using OAN, all five detectors obtain more than a 30.0% speed-up on three large-scale aerial image datasets, with consistent accuracy improvements. On extremely large Gaofen-2 images (29200$\times$27620 pixels), OAN improves the detection speed by 70.5%. Moreover, we extend OAN to driving-scene object detection and 4K video object detection, boosting the detection speed by 112.1% and 75.0%, respectively, without sacrificing accuracy. Code is available at https://github.com/Ranchosky/OAN.
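The abstract describes OAN's role as gating which patches reach the detector; the sketch below illustrates that gating pattern with a toy objectness score (the function names and the pixel-fraction score are ours, standing in for OAN's learned network).

```python
import numpy as np

def gated_patches(image, objectness_fn, patch=256, thresh=0.5):
    """Split a large image into a grid of patches and keep only those
    whose objectness score exceeds the threshold, so the expensive
    detector runs on far fewer patches than exhaustive tiling."""
    keep = []
    h, w = image.shape[:2]
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            tile = image[y:y + patch, x:x + patch]
            if objectness_fn(tile) >= thresh:
                keep.append((y, x, tile))
    return keep

# Toy objectness: percentage of non-zero pixels, as a stand-in score.
img = np.zeros((512, 512))
img[0:50, 0:50] = 1.0                # objects only in the top-left patch
kept = gated_patches(img, lambda t: (t > 0).mean() * 100,
                     patch=256, thresh=0.5)
```

Here only one of the four patches passes the gate, so a downstream detector would process a quarter of the tiles, which is the source of the speed-ups the abstract reports.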